
    10411 Abstracts Collection -- Computational Video

    From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl – Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Audio Resynthesis on the Dancefloor: A Music Structural Approach

    This technical report improves and extends existing methods in the research area of audio resynthesis and retargeting and broadens their scope of application. The existing approach analyzes a musical piece for possible cut points that allow the resynthesis of a novel soundtrack by lining up the source segments according to specified rules. To better match harmonic and rhythmic structures during the search for cut points, beat tracking is used as the core component of this work. Segment rearrangement is improved by employing faster and better-suited algorithms.
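    For illustration only, the following sketch shows how beat tracking can anchor a cut-point search of the kind the report describes. It uses librosa's beat tracker and chroma features; the segment-matching threshold is an arbitrary placeholder, not the report's actual criterion.

```python
# Illustrative sketch (not the report's implementation): beat tracking
# anchors candidate cut points to the rhythmic grid, and chroma features
# around each beat serve as a rough harmonic-similarity test for jumps.
import numpy as np
import librosa

def candidate_cut_points(path, threshold=0.1):
    y, sr = librosa.load(path)
    # Beat tracking constrains cuts to musically plausible positions.
    tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
    # Chroma vectors roughly capture the harmonic content at each beat.
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    beat_chroma = chroma[:, beat_frames]          # one chroma vector per beat
    cuts = []
    for i in range(beat_chroma.shape[1]):
        for j in range(i + 2, beat_chroma.shape[1]):
            # A jump from beat i to beat j is allowed if the harmonic
            # context on both sides is similar enough.
            d = np.linalg.norm(beat_chroma[:, i] - beat_chroma[:, j])
            if d < threshold:
                cuts.append((i, j))
    return librosa.frames_to_time(beat_frames, sr=sr), cuts
```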

    An X-ray, Optical and Radio Search for Supernova Remnants in the Nearby Sculptor Group Sd Galaxy NGC 7793

    We present a multi-wavelength study of the properties of supernova remnants (SNRs) in the nearby Sculptor Group Sd galaxy NGC 7793. Using our own Very Large Array radio observations at 6 and 20 cm, as well as archived ROSAT X-ray data, previously published optical results, and our own H-alpha image, we have searched for X-ray and radio counterparts to previously known optically identified SNRs and for new, previously unidentified SNRs in these two wavelength regimes. Only two of the 28 optically identified SNRs are detected at another wavelength. The most noteworthy source in our study is N7793-S26, the only SNR detected at all three wavelengths. It features a long (approximately 450 pc) filamentary morphology that is clearly seen in both the optical and radio images. N7793-S26's radio luminosity exceeds that of the Galactic SNR Cas A, and based on equipartition calculations we determine that an energy of at least 10^52 ergs is required to maintain this source. A second optically identified SNR, N7793-S11, has detectable radio emission but is not detected in the X-ray. Complementary X-ray and radio searches have yielded five new candidate radio SNRs, to be added to the 28 SNRs in this galaxy that have already been detected by optical methods. We find that the density of the ambient interstellar medium (ISM) surrounding these SNRs significantly impacts their spectral characteristics, consistent with surveys of the SNR populations in other galaxies.
    Comment: 32 pages, 25 figures; to appear in the Astrophysical Journal (February 2002).
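    For readers unfamiliar with the equipartition argument behind the 10^52 erg figure, a schematic version of the standard minimum-energy estimate runs as follows; coefficients and filling factors are omitted, and this is not necessarily the authors' exact prescription.

```latex
% Schematic minimum-energy (equipartition) estimate for a synchrotron
% source of volume V, luminosity L, and magnetic field strength B.
\begin{align*}
E(B) &= \underbrace{a\,L\,B^{-3/2}}_{\text{relativistic particles}}
      \;+\; \underbrace{\frac{B^{2}}{8\pi}\,V}_{\text{magnetic field}},\\
\frac{dE}{dB} = 0 \;&\Rightarrow\;
  B_{\min} \propto \left(\frac{L}{V}\right)^{2/7},\qquad
  E_{\min} \propto V^{3/7}\,L^{4/7}.
\end{align*}
```

    Because E_min grows with both volume and luminosity, a remnant as extended (about 450 pc) and as radio-luminous as N7793-S26 necessarily implies a very large minimum energy.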

    Geometry-based Automatic Object Localization and 3-D Pose Detection

    Given the image of a real-world scene and a polygonal 3-D model of a depicted object, the object's apparent size, image coordinates, and 3-D orientation are detected autonomously. Based on matching the silhouette outline to edges in the image, an extensive search in parameter space converges to the best-matching set of parameter values. The apparent object size may be unknown a priori, and no initial search parameter values need to be provided. Due to its high degree of parallelism, the algorithm is well suited for implementation on graphics hardware to achieve fast object recognition and 3-D pose estimation.
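    The following minimal sketch illustrates one common way such a silhouette-to-edge parameter search can be realized, via a chamfer-style edge distance field and a brute-force grid over pose parameters. The toy projection function and the restricted parameter subset are simplifications, not the paper's implementation; note that every candidate pose is scored independently, which is exactly the kind of parallelism that maps well to graphics hardware.

```python
# Chamfer-style silhouette-to-edge matching by exhaustive parameter search
# (illustrative simplification; only rotation about one axis, scale, and
# 2-D translation are searched here).
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_field(edges):
    # Distance from every pixel to the nearest edge pixel (edges: bool mask).
    return distance_transform_edt(~edges)

def project_outline(outline_3d, angle, scale, tx, ty):
    # Toy orthographic projection: rotate about the vertical axis, drop z.
    c, s = np.cos(angle), np.sin(angle)
    x = c * outline_3d[:, 0] + s * outline_3d[:, 2]
    y = outline_3d[:, 1]
    return np.stack([scale * x + tx, scale * y + ty], axis=1)

def best_pose(edges, outline_3d):
    dist = edge_distance_field(edges)
    h, w = edges.shape
    best, best_params = np.inf, None
    # Exhaustive coarse search; no initial guess is required, mirroring
    # the paper's claim that no starting parameter values are needed.
    for angle in np.linspace(0, 2 * np.pi, 36, endpoint=False):
        for scale in (20.0, 40.0, 80.0):
            for tx in range(0, w, 16):
                for ty in range(0, h, 16):
                    pts = project_outline(outline_3d, angle, scale, tx, ty)
                    xs = np.clip(pts[:, 0].astype(int), 0, w - 1)
                    ys = np.clip(pts[:, 1].astype(int), 0, h - 1)
                    score = dist[ys, xs].mean()   # mean edge distance
                    if score < best:
                        best, best_params = score, (angle, scale, tx, ty)
    return best_params, best
```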

    Cloth Motion from Optical Flow

    This paper presents an algorithm for capturing the motion of deformable surfaces, in particular textured cloth. In a calibrated multi-camera setup, the optical flow between consecutive video frames is determined and the 3D scene flow is computed. We use a deformable surface model with constraints on vertex distances and curvature to increase the robustness of the optical flow measurements. Tracking errors in long video sequences are corrected by a silhouette matching procedure. We present results for synthetic cloth simulations and discuss how they can be extended to real-world footage.
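    As a rough illustration of the geometric constraints mentioned above, the sketch below applies a generic position-based relaxation that pulls scene-flow-displaced vertices back toward their rest edge lengths; the authors' actual surface model and solver may differ.

```python
# Generic position-based edge-length constraint projection (illustrative;
# it damps scene-flow noise that would stretch or shrink the cloth mesh).
import numpy as np

def enforce_edge_lengths(verts, edges, rest_len, iters=10, stiffness=0.5):
    """verts: (N,3) positions after adding scene flow; edges: (E,2) indices;
    rest_len: (E,) rest lengths of the corresponding edges."""
    v = verts.copy()
    for _ in range(iters):
        for (i, j), r in zip(edges, rest_len):
            d = v[j] - v[i]
            length = np.linalg.norm(d)
            if length < 1e-9:
                continue
            # Move both endpoints symmetrically to restore the rest length.
            corr = stiffness * 0.5 * (length - r) * d / length
            v[i] += corr
            v[j] -= corr
    return v
```

    A curvature constraint could be added in the same style, e.g. by relaxing the angle between adjacent face normals, but is omitted here for brevity.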

    Joint 3D-Reconstruction and Background Separation in Multiple Views using Graph Cuts

    This paper deals with simultaneous depth map estimation and background separation in a multi-view setting with several fixed, calibrated cameras, two problems which have previously been addressed separately. We demonstrate that their strong interdependency can be exploited elegantly by minimizing a discrete energy functional which evaluates both properties at the same time. Our algorithm is derived from the powerful "Multi-Camera Scene Reconstruction via Graph Cuts" algorithm recently presented by Kolmogorov and Zabih. Experiments with both real-world and synthetic scenes demonstrate that the combined approach yields even more correct depth estimates. In particular, the additional information gained by taking the background into account considerably increases the algorithm's robustness against noise.
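    A schematic of such a joint energy is sketched below: each pixel takes either a discrete depth label or a dedicated background label, so both properties are evaluated by one functional. The paper minimizes this with graph cuts; here a naive ICM sweep stands in for the solver, and the photo-consistency and background cost volumes are hypothetical precomputed inputs.

```python
# Joint depth / background labeling as one discrete energy (schematic).
# photo_cost[y, x, d]: photo-consistency cost of depth label d at pixel (y, x)
# bg_cost[y, x]:       cost of explaining pixel (y, x) by the background model
import numpy as np

BG = -1  # special background label alongside discrete depth labels 0..D-1

def energy_local(labels, y, x, cand, photo_cost, bg_cost, lam=1.0):
    # Data term: photo-consistency for a depth label, or the background cost.
    data = bg_cost[y, x] if cand == BG else photo_cost[y, x, cand]
    # Potts smoothness term couples the depth and background decisions.
    smooth = 0.0
    h, w = labels.shape
    for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] != cand:
            smooth += lam
    return data + smooth

def icm_sweep(labels, photo_cost, bg_cost, lam=1.0):
    # Iterated conditional modes: a weak stand-in for the graph-cut solver,
    # used here only to make the energy definition concrete.
    h, w, depths = photo_cost.shape
    cands = list(range(depths)) + [BG]
    for y in range(h):
        for x in range(w):
            costs = [energy_local(labels, y, x, c, photo_cost, bg_cost, lam)
                     for c in cands]
            labels[y, x] = cands[int(np.argmin(costs))]
    return labels
```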

    Multi-image Interpolation based on Graph-cuts and Symmetric Optical Flow

    [Figure 1: (a) Scene rendered by adaptively blending four forward-warped images as proposed by Stich et al.; the image shows ghosting artifacts (red boxes) and appears blurry. (b) Our proposed graph-cut interpolation algorithm; black areas indicate holes that are invisible in all input cameras. (c) The image overlaid with the optimal labeling; each color denotes a different source image. (d) Final result; invisible regions are filled by spatio-temporal image inpainting.]
    Multi-image interpolation in space and time has recently received considerable attention. Typically, the interpolated image is synthesized by adaptively blending several forward-warped images. Blending itself is a low-pass filtering operation: the interpolated images are prone to blurring and ghosting artifacts as soon as the underlying correspondence fields are imperfect. We address both issues and propose a multi-image interpolation algorithm that avoids blending. Instead, our algorithm decides for each pixel in the synthesized view from which input image to sample. Combined with a symmetric long-range optical flow formulation for correspondence field estimation, our approach yields crisp interpolated images without ghosting artifacts.
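    The central idea, per-pixel source selection instead of blending, can be sketched in a few lines. The warped images and matching costs are assumed given, and unlike this independent per-pixel argmin, the paper additionally regularizes the labeling (via the graph-cut formulation named in the title).

```python
# Per-pixel source selection instead of blending (illustrative core idea).
# warped: (K, H, W, 3) forward-warped input images
# cost:   (K, H, W)    per-pixel matching cost; np.inf marks "invisible here"
import numpy as np

def select_sources(warped, cost):
    labels = np.argmin(cost, axis=0)                       # (H, W) source index
    k, h, w, _ = warped.shape
    # Sample each output pixel from exactly one input image: no low-pass
    # blending, hence no blur or ghosting from imperfect correspondences.
    out = warped[labels, np.arange(h)[:, None], np.arange(w)[None, :]]
    # Pixels invisible in every input become holes for later inpainting.
    holes = np.all(np.isinf(cost), axis=0)
    out[holes] = 0
    return out, labels, holes
```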

    10411 Executive Summary -- Computational Video

    Dagstuhl Seminar 10411 "Computational Video" took place October 10-15, 2010. 43 researchers from North America, Asia, and Europe discussed the state-of-the-art, contemporary challenges, and future research in imaging, processing, analyzing, modeling, and rendering of real-world, dynamic scenes. The seminar was organized into 11 sessions of presentations, discussions, and special-topic meetings. It brought together junior and senior researchers from computer vision, computer graphics, and image communication, both from academia and industry, to address the challenges in computational video. Participants included international experts from Kyoto University, Stanford University, University of British Columbia, University of New Mexico, University of Toronto, MIT, Hebrew University of Jerusalem, Technion - Haifa, ETH Zürich, Heriot-Watt University - Edinburgh, University of Surrey, and University College London, as well as professionals from Adobe Systems, BBC Research & Development, Disney Research, and Microsoft Research.

    A flexible and versatile studio for synchronized multi-view video recording

    In recent years, the convergence of computer vision and computer graphics has put forth new research areas that work on scene reconstruction from, and analysis of, multi-view video footage. In free-viewpoint video, for example, new views of a scene are generated from an arbitrary viewpoint in real time, using a set of multi-view video streams as input. The analysis of real-world scenes from multi-view video to extract motion information or reflection models is another field of research that greatly benefits from high-quality input data. Building a recording setup for multi-view video involves great effort on the hardware as well as the software side: the amount of image data to be processed is huge, a suitable lighting and camera setup is essential for naturalistic scene appearance and robust background subtraction, and the computing infrastructure has to enable real-time processing of the recorded material. This paper describes our recording setup for multi-view video acquisition, which enables the synchronized recording of dynamic scenes from multiple camera positions under controlled conditions. The requirements for the studio and their implementation in its separate components are described in detail. The efficiency and flexibility of the studio are demonstrated by the results we obtain with a real-time 3D scene reconstruction system, a system for non-intrusive optical motion capture, and a model-based free-viewpoint video system for human actors.
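    Purely as an illustration of the coordination problem (a studio like the one described would typically rely on hardware camera triggering rather than software timing), a software-synchronized capture loop might look like this, with one thread per camera released frame by frame by a shared barrier:

```python
# Toy software-synchronized multi-camera capture (illustrative only; real
# multi-view studios use hardware triggering for frame-accurate sync).
# Assumes OpenCV-visible cameras at indices 0..n_cameras-1.
import threading
import cv2

def capture(cam_index, barrier, frames, n_frames):
    cap = cv2.VideoCapture(cam_index)
    for f in range(n_frames):
        barrier.wait()                 # all camera threads grab in lockstep
        ok, img = cap.read()
        if ok:
            frames[(cam_index, f)] = img
    cap.release()

def record(n_cameras=4, n_frames=100):
    frames = {}
    barrier = threading.Barrier(n_cameras)
    threads = [threading.Thread(target=capture,
                                args=(i, barrier, frames, n_frames))
               for i in range(n_cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return frames                      # keyed by (camera index, frame number)
```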